Collision-aware Task Assignment for Multi-Robot Systems
We propose a novel formulation of the collision-aware task assignment (CATA)
problem and a decentralized auction-based algorithm that solves it with an
optimality bound. Using a collision cone, we predict potential collisions and
introduce a binary decision variable into the local reward function for task
bidding. We further improve CATA by implementing a receding collision horizon
to address the stopping-robot scenario, i.e., when robots are confined to their
task location and become static obstacles to other moving robots. The
auction-based algorithm encourages the robots to bid for tasks with collision
mitigation considerations. We validate the improved task assignment solution
with both simulation and experimental results, which show a significant
reduction in overlapping paths as well as deadlocks.
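The interplay of the collision cone and the binary decision variable in the local reward can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the cone half-angle, penalty weight, and all function names are hypothetical.

```python
import numpy as np

def collision_flag(p_i, v_i, p_j, half_angle=np.pi / 6):
    """Hypothetical collision-cone check: 1 if robot j lies inside the cone
    opened around robot i's heading toward its task, else 0."""
    rel = p_j - p_i
    dist = np.linalg.norm(rel)
    speed = np.linalg.norm(v_i)
    if dist == 0.0 or speed == 0.0:
        return 0
    cos_angle = np.dot(rel, v_i) / (dist * speed)
    return int(cos_angle > np.cos(half_angle))

def bid(p_i, task, others, penalty=10.0):
    """Local reward for auction bidding: negative travel cost, reduced by a
    penalty whenever the binary collision variable activates."""
    v_i = task - p_i                      # heading toward the candidate task
    reward = -np.linalg.norm(v_i)
    for p_j in others:
        reward -= penalty * collision_flag(p_i, v_i, p_j)
    return reward

# A free task is worth more than one whose path crosses another robot.
free = bid(np.zeros(2), np.array([5.0, 0.0]), [np.array([0.0, 5.0])])
blocked = bid(np.zeros(2), np.array([5.0, 0.0]), [np.array([2.5, 0.0])])
```

Robots bidding with this reward naturally prefer tasks whose approach paths avoid predicted collisions.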
Decentralized collaborative transport of fabrics using micro-UAVs
Small unmanned aerial vehicles (UAVs) generally have little capacity to carry
payloads. Through collaboration, UAVs can increase their joint payload
capacity and carry significantly heavier loads. For maximum flexibility in
dynamic and unstructured environments and task demands, we propose a fully
decentralized control infrastructure based on a swarm-specific scripting
language, Buzz. In this paper, we describe the control infrastructure and use
it to compare two algorithms for collaborative transport: field potentials and
spring-damper. We test the performance of our approach with a fleet of
micro-UAVs, demonstrating the potential of decentralized control for
collaborative transport.
Comment: Submitted to the 2019 International Conference on Robotics and Automation (ICRA). 6 pages.
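The paper's controllers are written in Buzz; purely as an illustration of one of the two compared strategies, a generic 1D spring-damper law can be sketched in Python (gains and names are hypothetical, not taken from the paper):

```python
def spring_damper_force(x, x_des, v, k=2.0, c=0.5):
    """Generic spring-damper control law: pull toward the desired offset
    x_des while damping the current velocity v."""
    return -k * (x - x_des) - c * v

# UAV sits 1 m past its desired attachment offset, currently at rest:
f = spring_damper_force(x=1.0, x_des=0.0, v=0.0)
```

Each UAV applying such a force toward its desired offset on the fabric keeps the formation taut while damping oscillations.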
Decentralized Connectivity-Preserving Deployment of Large-Scale Robot Swarms
We present a decentralized and scalable approach for deployment of a robot
swarm. Our approach tackles scenarios in which the swarm must reach multiple
spatially distributed targets while enforcing the constraint that the robot
network cannot be split. The basic idea behind our work is to construct a
logical tree topology over the physical network formed by the robots. The
logical tree acts as a backbone used by robots to enforce connectivity
constraints. We study and compare two algorithms to form the logical tree:
outwards and inwards. These algorithms differ in the order in which the robots
join the tree: the outwards algorithm starts at the tree root and grows towards
the targets, while the inwards algorithm proceeds in the opposite manner. Both
algorithms perform periodic reconfiguration to prevent suboptimal topologies
from halting the growth of the tree. Our contributions are: (i) the
formulation of the two algorithms; (ii) a comparison of the algorithms in
extensive physics-based simulations; and (iii) a validation of our findings
through real-robot experiments.
Comment: 8 pages, 8 figures, submitted to IROS 201
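As a rough, centralized illustration of the "outwards" growth order (the paper's algorithms are decentralized and include reconfiguration, which is omitted here; all names are hypothetical):

```python
from collections import deque

def outwards_tree(positions, root, comm_range):
    """Sketch of outwards growth: starting from the root, repeatedly attach
    unconnected robots that are within communication range of a robot
    already in the logical tree."""
    parent = {root: None}
    frontier = deque([root])
    while frontier:
        u = frontier.popleft()
        for v, pv in positions.items():
            if v in parent:
                continue
            pu = positions[u]
            if (pu[0] - pv[0]) ** 2 + (pu[1] - pv[1]) ** 2 <= comm_range ** 2:
                parent[v] = u       # v joins the logical tree under u
                frontier.append(v)
    return parent

pos = {"r0": (0, 0), "r1": (1, 0), "r2": (2, 0), "r3": (9, 9)}
tree = outwards_tree(pos, "r0", comm_range=1.5)
# r3 is out of range of every tree member and stays unconnected
```

The "inwards" variant would instead proceed in the opposite order, attaching robots starting from those nearest the targets.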
Modeling Perceptual Aliasing in SLAM via Discrete-Continuous Graphical Models
Perceptual aliasing is one of the main causes of failure for Simultaneous
Localization and Mapping (SLAM) systems operating in the wild. Perceptual
aliasing is the phenomenon where different places generate a similar visual
(or, in general, perceptual) footprint. This causes spurious measurements to be
fed to the SLAM estimator, which typically results in incorrect localization
and mapping results. The problem is exacerbated by the fact that those outliers
are highly correlated, in the sense that perceptual aliasing creates a large
number of mutually-consistent outliers. Another issue stems from the fact that
most state-of-the-art techniques rely on a given trajectory guess (e.g., from
odometry) to discern between inliers and outliers, and this makes the resulting
pipeline brittle, since the accumulation of error may result in incorrect
choices and recovery from failures is far from trivial. This work provides a
unified framework to model perceptual aliasing in SLAM and provides practical
algorithms that can cope with outliers without relying on any initial guess. We
present two main contributions. The first is a Discrete-Continuous Graphical
Model (DC-GM) for SLAM: the continuous portion of the DC-GM captures the
standard SLAM problem, while the discrete portion describes the selection of
the outliers and models their correlation. The second contribution is a
semidefinite relaxation to perform inference in the DC-GM that returns
estimates with provable sub-optimality guarantees. Experimental results on
standard benchmarking datasets show that the proposed technique compares
favorably with state-of-the-art methods while not relying on an initial guess
for optimization.
Comment: 13 pages, 14 figures, 1 table.
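The coupling of continuous poses and discrete outlier variables can be written schematically as follows. The notation is simplified and partly hypothetical ($r$ denotes measurement residuals, $\bar c$ a constant outlier penalty, $\alpha_{kl}$ coupling weights); it is meant only to convey the structure of the objective:

```latex
\min_{x,\;\theta \in \{-1,+1\}^m}
  \sum_{(i,j)\in\mathcal{E}_{\mathrm{odom}}} \|r_{ij}(x)\|^2
+ \sum_{k=1}^{m} \left[ \tfrac{1+\theta_k}{2}\,\|r_k(x)\|^2
                      + \tfrac{1-\theta_k}{2}\,\bar{c} \right]
- \sum_{(k,l)\in\mathcal{E}_{\theta}} \alpha_{kl}\,\theta_k\theta_l
```

The first sum is the standard (continuous) SLAM cost, the second lets each discrete variable $\theta_k$ either accept a loop closure residual ($\theta_k = +1$) or pay a fixed penalty ($\theta_k = -1$), and the last term is a discrete coupling that models the correlation among outliers.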
VIR-SLAM: Visual, Inertial, and Ranging SLAM for single and multi-robot systems
Monocular cameras coupled with inertial measurements generally give
high-performance visual-inertial odometry. However, drift can be significant with
long trajectories, especially when the environment is visually challenging. In
this paper, we propose a system that leverages ultra-wideband ranging with one
static anchor placed in the environment to correct the accumulated error
whenever the anchor is visible. We also use this setup for collaborative SLAM:
different robots use mutual ranging (when available) and the common anchor to
estimate the transformation between each other, facilitating map fusion. Our
system consists of two modules: a double layer ranging, visual, and inertial
odometry for single robots, and a transformation estimation module for
collaborative SLAM. We test our system on public datasets by simulating an
ultra-wideband sensor as well as on real robots. Experiments show our method
can outperform state-of-the-art visual-inertial odometry by more than 20%. For
visually challenging environments, our method works even when the
visual-inertial odometry has significant drift. Furthermore, we can compute the
collaborative SLAM transformation matrix at almost no extra computational cost.
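The core idea of correcting accumulated drift with a single static anchor can be illustrated with a toy update (this is a simplification, not the paper's double-layer odometry; the gain and names are hypothetical):

```python
import numpy as np

def range_correction(p_est, anchor, r_meas, gain=1.0):
    """Hypothetical single-anchor correction: pull the drifting position
    estimate toward the sphere of measured UWB range around the anchor."""
    d = p_est - anchor
    dist = np.linalg.norm(d)
    if dist == 0.0:
        return p_est
    return p_est + gain * (r_meas - dist) * (d / dist)

# Odometry says the robot is 3 m from the anchor, but UWB measures 4 m:
p = range_correction(np.array([3.0, 0.0]), np.zeros(2), r_meas=4.0)
```

Each time the anchor is visible, such a correction bounds the otherwise unbounded odometric drift.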
Real-Time Simultaneous Localization and Mapping with LiDAR intensity
We propose a novel real-time LiDAR intensity image-based simultaneous
localization and mapping method, which addresses the geometry degeneracy
problem in unstructured environments. Traditional LiDAR-based front-end
odometry mostly relies on geometric features such as points, lines and planes.
A lack of these features in the environment can lead to the failure of the
entire odometry system. To avoid this problem, we extract feature points from
the LiDAR-generated point cloud that match features identified in LiDAR
intensity images. We then use the extracted feature points to perform scan
registration and estimate the robot's ego-motion. For the back-end, we jointly
optimize the distance between corresponding feature points and the
point-to-plane distance for planes identified in the map. In addition, we use the
features extracted from intensity images to detect loop closure candidates from
previous scans and perform pose graph optimization. Our experiments show that
our method runs in real time with high accuracy and performs well under
illumination changes and in low-texture, unstructured environments.
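A minimal sketch of the two residual types the back-end jointly optimizes; the weighting and association structure are hypothetical placeholders:

```python
import numpy as np

def point_to_point(p, q):
    """Residual between a feature point p and its matched point q."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

def point_to_plane(p, q, n):
    """Signed distance from point p to the plane through q with unit normal n."""
    p, q, n = (np.asarray(v, float) for v in (p, q, n))
    return float(np.dot(p - q, n))

def joint_cost(features, planes, w=1.0):
    """Joint back-end cost over matched feature pairs and plane associations."""
    return (sum(point_to_point(p, q) ** 2 for p, q in features)
            + w * sum(point_to_plane(p, q, n) ** 2 for p, q, n in planes))

c = joint_cost([((0, 0, 0), (1, 0, 0))],                    # one feature match
               [((0, 0, 2), (0, 0, 0), (0, 0, 1))])         # one plane match
```

Minimizing such a cost over the robot pose aligns the intensity-derived feature points with the map while keeping planar regions consistent.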
Accurate position tracking with a single UWB anchor
Accurate localization and tracking are fundamental requirements for robotic
applications. Localization systems such as GPS, optical tracking, and
simultaneous localization and mapping (SLAM) are used for daily life activities, research,
and commercial applications. Ultra-wideband (UWB) technology provides another
avenue for accurately locating devices both indoors and outdoors. In this paper, we
study a localization solution with a single UWB anchor, instead of the
traditional multi-anchor setup. Besides the challenge of a single UWB ranging
source, the only other sensor we require is a low-cost 9 DoF inertial
measurement unit (IMU). Under such a configuration, we propose continuous
monitoring of UWB range changes to estimate the robot speed when moving on a
line. Combining speed estimation with orientation estimation from the IMU
sensor, the system becomes temporally observable. We use an Extended Kalman
Filter (EKF) to estimate the pose of a robot. With our solution, we can
effectively correct the accumulated error and maintain accurate tracking of a
moving robot.
Comment: Accepted by ICRA 2020.
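The speed estimate from range changes on a straight-line segment can be illustrated with a toy computation (a simplification of the idea, with hypothetical names: the range-rate relates to speed through the bearing to the anchor):

```python
def speed_from_ranges(r0, r1, dt, cos_bearing):
    """On a straight segment, the UWB range-rate relates to the robot speed v
    via dr/dt ≈ v * cos(bearing), so v ≈ (r1 - r0) / (dt * cos(bearing))."""
    return (r1 - r0) / (dt * cos_bearing)

# Robot drives straight away from the anchor (bearing = 0), range grows
# from 4.0 m to 4.5 m over one second:
v = speed_from_ranges(r0=4.0, r1=4.5, dt=1.0, cos_bearing=1.0)
```

Combining such a speed estimate with IMU orientation is what makes the single-anchor system temporally observable, so an EKF can fuse the two.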
CAPRICORN: Communication Aware Place Recognition using Interpretable Constellations of Objects in Robot Networks
Using multiple robots for exploring and mapping environments can provide
improved robustness and performance, but it can be difficult to implement. In
particular, limited communication bandwidth is a considerable constraint when a
robot needs to determine if it has visited a location that was previously
explored by another robot, as it requires robots to share descriptions of
places they have visited. One way to compress this description is to use
constellations, groups of 3D points that correspond to the estimate of a set of
relative object positions. Constellations maintain the same pattern from
different viewpoints and can be robust to illumination changes or dynamic
elements. We present a method to extract compact spatial and semantic
descriptors of the objects in a scene from these constellations. We use this
representation in a 2-step decentralized loop closure verification: first, we
distribute the compact semantic descriptors to determine which other robots
might have seen scenes with similar objects; then we query matching robots with
the full constellation to validate the match using geometric information. The
proposed method requires less memory, is more interpretable than global image
descriptors, and could be useful for other tasks and interactions with the
environment. We validate our system's performance on a TUM RGB-D SLAM sequence
and show its benefits in terms of bandwidth requirements.
Comment: 8 pages, 6 figures, 1 table. 2020 IEEE International Conference on Robotics and Automation (ICRA).
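The two-step verification can be sketched as follows. This is an illustration of the idea only, with hypothetical descriptors: stage 1 broadcasts a cheap bag of object labels, and stage 2 compares the full constellation geometry via sorted pairwise distances, which are invariant to viewpoint:

```python
from collections import Counter
import numpy as np

def semantic_descriptor(objects):
    """Stage 1 (hypothetical sketch): a compact bag of object labels,
    cheap to broadcast to the other robots."""
    return Counter(label for label, _ in objects)

def geometric_match(objs_a, objs_b, tol=0.1):
    """Stage 2: verify a candidate match with the full constellation by
    comparing sorted pairwise distances between object positions."""
    def pairwise(objs):
        pts = np.array([p for _, p in objs], dtype=float)
        d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
        return np.sort(d[np.triu_indices(len(pts), k=1)])
    da, db = pairwise(objs_a), pairwise(objs_b)
    return len(da) == len(db) and bool(np.allclose(da, db, atol=tol))

scene = [("chair", (0, 0, 0)), ("desk", (1, 0, 0)), ("lamp", (0, 2, 0))]
# The same scene observed from a viewpoint rotated by 90 degrees:
rotated = [("chair", (0, 0, 0)), ("desk", (0, 1, 0)), ("lamp", (-2, 0, 0))]
```

Only robots whose stage-1 descriptors match are queried with the full constellation, which is what keeps the bandwidth requirements low.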